
    Affordance-Based Grasping Point Detection Using Graph Convolutional Networks for Industrial Bin-Picking Applications

    Grasping point detection has traditionally been a core robotics and computer vision problem. In recent years, deep learning-based methods have been widely used to predict grasping points and have shown strong generalization capabilities under uncertainty. In particular, approaches that predict object affordances without relying on object identity have obtained promising results in random bin-picking applications. However, most of them rely on RGB/RGB-D images, and it is not clear to what extent 3D spatial information is used. Graph Convolutional Networks (GCNs) have been successfully used for object classification and scene segmentation in point clouds, and also to predict grasping points in simple laboratory settings. In this work, we adapted the Deep Graph Convolutional Network model with the intuition that learning from n-dimensional point clouds would boost performance in predicting object affordances. To the best of our knowledge, this is the first time that GCNs have been applied to predict affordances for suction and gripper end effectors in an industrial bin-picking environment. Additionally, we designed a bin-picking-oriented data preprocessing pipeline that eases the learning process and yields a flexible solution for any bin-picking application. To train our models, we created a highly accurate RGB-D/3D dataset, which is openly available on demand. Finally, we benchmarked our method against a 2D Fully Convolutional Network based method, improving the top-1 precision score by 1.8% and 1.7% for suction and gripper, respectively. This project received funding from the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No. 780488.
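
    The abstract does not detail the network, but the core building block of the Deep Graph Convolutional Network it adapts can be illustrated. Below is a minimal, hypothetical PyTorch sketch of a single DGCNN-style EdgeConv layer producing per-point affordance scores from a raw point cloud; the layer sizes, neighbourhood size k, and two-channel suction/gripper head are illustrative assumptions, not the paper's actual architecture.

        # Minimal sketch of a DGCNN-style EdgeConv layer for per-point
        # affordance scoring on a point cloud of shape (N, 3).
        import torch
        import torch.nn as nn

        def knn(points: torch.Tensor, k: int) -> torch.Tensor:
            """Indices of the k nearest neighbours of each point, shape (N, k)."""
            dists = torch.cdist(points, points)                      # (N, N) pairwise distances
            return dists.topk(k + 1, largest=False).indices[:, 1:]   # drop self-match

        class EdgeConv(nn.Module):
            """EdgeConv: for each point i and neighbour j, apply an MLP to
            [x_i, x_j - x_i] and max-pool over the neighbourhood."""
            def __init__(self, in_dim: int, out_dim: int, k: int = 16):
                super().__init__()
                self.k = k
                self.mlp = nn.Sequential(
                    nn.Linear(2 * in_dim, out_dim), nn.ReLU(),
                    nn.Linear(out_dim, out_dim), nn.ReLU(),
                )

            def forward(self, feats: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
                idx = knn(coords, self.k)                      # (N, k)
                neigh = feats[idx]                             # (N, k, in_dim)
                center = feats.unsqueeze(1).expand_as(neigh)   # (N, k, in_dim)
                edge = torch.cat([center, neigh - center], dim=-1)
                return self.mlp(edge).max(dim=1).values        # (N, out_dim)

        # Toy usage: per-point scores for two assumed affordance channels
        # (suction, gripper) from raw xyz coordinates.
        cloud = torch.rand(1024, 3)
        layer = EdgeConv(in_dim=3, out_dim=64)
        head = nn.Linear(64, 2)
        scores = torch.sigmoid(head(layer(cloud, cloud)))      # (1024, 2)

    In a full model, several such layers would be stacked and their features concatenated before the prediction head; the sketch keeps a single layer to show the edge-feature construction the paper builds on.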

    Camera Pose Optimization for 3D Mapping

    Digital 3D models of environments are of great value in many applications, but the algorithms that build them autonomously are computationally expensive and require considerable time. In this work, we present an active simultaneous localisation and mapping system that optimises the pose of the sensor for the 3D reconstruction of an environment, while a 2D Rapidly-Exploring Random Tree algorithm controls the motion of the mobile platform as the ground exploration strategy. Our objective is to obtain a 3D map comparable to that produced by a complete 3D approach, in a time of the same order of magnitude as a 2D exploration algorithm. The optimisation is performed with a ray-tracing technique over a set of candidate poses, based on an uncertainty octree built during exploration whose values depend on the viewpoints from which each node has been observed. The system is tested in diverse simulated environments and compared with two exploration methods from the literature, one based on 2D and another that considers the complete 3D space. Experiments show that, by combining our algorithm with a 2D exploration method, the 3D map obtained is comparable in quality to that obtained with a pure 3D exploration procedure, while demanding less time. This work was supported in part by the project "5R-Red Cervera de Tecnologías Robóticas en Fabricación Inteligente" through the "Centros Tecnológicos de Excelencia Cervera" programme funded by the Centre for the Development of Industrial Technology (CDTI), under contract CER-20211007.
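
    The candidate-pose scoring idea can be sketched independently of the paper's octree implementation. The hypothetical Python below casts rays from each candidate sensor pose through a voxelised uncertainty map and picks the pose whose visible voxels carry the most accumulated uncertainty; the dictionary-based voxel store, fixed ray step, and hand-built field of view are simplifications of the paper's octree-based formulation, not its actual method.

        # Hedged sketch: score candidate sensor poses by ray-casting into a
        # voxelised uncertainty map and summing uncertainty of visible voxels.
        import numpy as np

        VOXEL = 0.1  # voxel edge length in metres

        def key(p):
            return tuple(np.floor(p / VOXEL).astype(int))

        def score_pose(origin, directions, uncertainty, max_range=5.0, step=0.05):
            """Sum the uncertainty of the first mapped voxel hit along each ray."""
            seen, total = set(), 0.0
            for d in directions:
                for t in np.arange(step, max_range, step):
                    k = key(origin + t * d)
                    if k in uncertainty:            # ray hits a mapped voxel
                        if k not in seen:
                            seen.add(k)
                            total += uncertainty[k]
                        break                       # occlusion: stop at first hit
            return total

        def best_pose(candidates, directions, uncertainty):
            return max(candidates, key=lambda o: score_pose(o, directions, uncertainty))

        # Toy map: a wall of voxels at x-index 5 with random uncertainty values.
        rng = np.random.default_rng(0)
        unc = {(5, y, z): rng.random() for y in range(-5, 5) for z in range(0, 5)}
        dirs = [np.array([1.0, y, z]) / np.linalg.norm([1.0, y, z])
                for y in np.linspace(-0.5, 0.5, 9) for z in np.linspace(-0.2, 0.4, 5)]
        cands = [np.array([0.0, 0.0, 0.2]), np.array([1.0, 1.0, 0.2])]
        print(best_pose(cands, dirs, unc))          # pose that sees the wall wins

    The first-hit break is what makes the score respect occlusions: uncertainty behind an already-mapped surface contributes nothing, which is the property that distinguishes ray-traced scoring from simply counting nearby uncertain voxels.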

    Active Mapping and Robot Exploration: A Survey

    Simultaneous localization and mapping addresses the problem of building a map of the environment without any prior information, based on the data obtained from one or more sensors. In most situations, the robot is driven by a human operator, but some systems are capable of navigating autonomously while mapping, which is called active simultaneous localization and mapping. This strategy focuses on actively computing the trajectories to explore the environment while building a map with minimum error. In this paper, a comprehensive review of the research developed in this field is provided, targeting the most relevant contributions in indoor mobile robotics. This research was funded by the ELKARTEK project ELKARBOT KK-2020/00092 of the Basque Government.

    Dynamic mosaic planning for a robotic bin-packing system based on picked part and target box monitoring

    This paper describes the dynamic mosaic planning method developed in the context of the PICKPLACE European project. The dynamic planner has allowed the development of a robotic system capable of packing a wide variety of objects without having to be adjusted to each reference. The mosaic planning system consists of three modules. First, the picked-item monitoring module inspects the grasped item to determine how the robot has picked it. At the same time, the destination container is monitored online to obtain the actual status of the packaging. To this end, we present a novel heuristic algorithm that, based on the point cloud of the scene, estimates the empty volume inside the container as empty maximal spaces (EMS). Finally, we present the dynamic IK-PAL mosaic planner, which dynamically estimates the optimal packing pose considering both the status of the picked part and the estimated EMSs. The developed method has been successfully integrated into a real robotic picking and packing system and validated with seven tests of increasing complexity. In these tests, we demonstrate the flexibility of the presented system in handling a wide range of objects in a real dynamic packaging environment. To our knowledge, this is the first time that a complete online picking and packing system has been deployed in a real robotic scenario, creating mosaics with arbitrary objects while accounting for the dynamics of a real robotic packing system. This article has been funded by the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No. 780488, and by the project "5R-Red Cervera de Tecnologías Robóticas en Fabricación Inteligente", contract number CER-20211007, under the "Centros Tecnológicos de Excelencia Cervera" programme funded by the Centre for the Development of Industrial Technology (CDTI).
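
    To give a flavour of the EMS idea, the hypothetical Python below rasterises a bin's point cloud into a heightmap and greedily grows an axis-aligned empty box above a chosen surface height. The grid resolution, growth rule, and seed choice are illustrative assumptions; the paper's heuristic operates on the point cloud directly and enumerates all maximal spaces.

        # Illustrative sketch of empty-maximal-space (EMS) estimation via a
        # heightmap: grow a rectangle of floor cells whose height <= `level`,
        # then extrude it from `level` up to the bin top.
        import numpy as np

        def heightmap(points, bin_dims, res):
            """Max point height per floor cell; unobserved cells stay at 0."""
            nx, ny = int(bin_dims[0] / res), int(bin_dims[1] / res)
            hm = np.zeros((nx, ny))
            for x, y, z in points:
                i, j = min(int(x / res), nx - 1), min(int(y / res), ny - 1)
                hm[i, j] = max(hm[i, j], z)
            return hm

        def grow_ems(hm, i, j, level, bin_h, res):
            """Grow a rectangle from seed cell (i, j) while all cells <= level."""
            x0, x1, y0, y1 = i, i, j, j
            grew = True
            while grew:
                grew = False
                if x1 + 1 < hm.shape[0] and (hm[x1 + 1, y0:y1 + 1] <= level).all():
                    x1 += 1; grew = True
                if y1 + 1 < hm.shape[1] and (hm[x0:x1 + 1, y1 + 1] <= level).all():
                    y1 += 1; grew = True
                if x0 > 0 and (hm[x0 - 1, y0:y1 + 1] <= level).all():
                    x0 -= 1; grew = True
                if y0 > 0 and (hm[x0:x1 + 1, y0 - 1] <= level).all():
                    y0 -= 1; grew = True
            # EMS as (min corner, max corner) in metres
            return (x0 * res, y0 * res, level), ((x1 + 1) * res, (y1 + 1) * res, bin_h)

        # Toy scene: two items already placed in a 0.6 m x 0.4 m x 0.3 m bin.
        pts = np.array([[0.12, 0.10, 0.08], [0.35, 0.40, 0.15]])
        hm = heightmap(pts, bin_dims=(0.6, 0.4), res=0.05)
        lo, hi = grow_ems(hm, 0, 0, level=0.0, bin_h=0.3, res=0.05)
        print(lo, hi)   # one empty box avoiding the occupied cells

    A planner would seed this growth from every distinct surface height and free cell, then rank the resulting boxes against the monitored dimensions and grasp state of the picked part when choosing the packing pose.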